Fluid imbibition into porous media featuring nanopores is ubiquitous in applications such as oil recovery from unconventional reservoirs and materials processing. While the imbibition of pure fluids has been extensively studied, the imbibition of fluid mixtures remains little explored. Here, we report a molecular dynamics study of the imbibition of model crude oil into nanometer-wide mineral pores, both when the pore walls are dry and when they are prewetted by residual water films. The results show the fastest imbibition, and the fastest propagation of molecularly thin precursor films ahead of the oil meniscus, in the dry-pore system. Thin water films on the pore walls, corresponding to an environmental relative humidity of 30%, slow down but still allow the spontaneous imbibition of single-component oil. Introducing polar components into the oil slows the imbibition into dry nanopores, due in part to clogging of the pore entrance, and a strong selectivity toward nonpolar oil is evident. The slowdown caused by polar oil is less pronounced in prewetted pores than in dry pores, but the selectivity toward nonpolar oil remains strong.

Free, publicly accessible full text available July 1, 2026.
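The abstract reports simulation results rather than equations, but for context, spontaneous capillary imbibition into a pore is classically described by the Lucas-Washburn relation, in which the front position grows as the square root of time. The sketch below is not from the paper (which uses molecular dynamics), and all parameter values are illustrative assumptions for a nanometer-scale pore; note also that this continuum baseline does not capture the precursor films or pore-entrance clogging discussed above.

```python
# Classical Lucas-Washburn estimate of the capillary imbibition depth:
#   l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))
# Included only as a rough continuum baseline; parameter values are
# illustrative assumptions, not values from the paper.
import math

def washburn_depth(t, radius, surface_tension, contact_angle_deg, viscosity):
    """Front position l(t) in meters for a cylindrical pore."""
    cos_theta = math.cos(math.radians(contact_angle_deg))
    return math.sqrt(surface_tension * radius * cos_theta * t / (2.0 * viscosity))

# Illustrative numbers: 2 nm pore radius, oil-like gamma = 25 mN/m,
# mu = 1 mPa*s, fully wetting walls (theta = 0).
for t in (1e-9, 1e-8, 1e-7):  # seconds
    l = washburn_depth(t, radius=2e-9, surface_tension=0.025,
                       contact_angle_deg=0.0, viscosity=1e-3)
    print(f"t = {t:.0e} s -> front = {l * 1e9:5.1f} nm")
```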
The ability to accurately interpret complex visual information is a crucial capability for multimodal large language models (MLLMs). Recent work indicates that enhanced visual perception significantly reduces hallucinations and improves performance on resolution-sensitive tasks such as optical character recognition and document analysis. A number of recent MLLMs achieve this using a mixture of vision encoders. Despite their success, there is a lack of systematic comparisons and detailed ablation studies addressing critical aspects such as expert selection and the integration of multiple vision experts. This study provides an extensive exploration of the design space for MLLMs that use a mixture of vision encoders and resolutions. Our findings reveal several underlying principles common to various existing strategies, leading to a streamlined yet effective design approach. We discover that simply concatenating visual tokens from a set of complementary vision encoders is as effective as more complex mixing architectures or strategies. We additionally introduce Pre-Alignment to bridge the gap between vision-focused encoders and language tokens, enhancing model coherence. The resulting family of MLLMs, Eagle, surpasses other leading open-source models on major MLLM benchmarks.

Free, publicly accessible full text available April 24, 2026.
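The abstract does not give implementation details, but its core finding (simply concatenating visual tokens from complementary vision encoders) can be sketched compactly. The following is a minimal, hypothetical illustration: the encoder classes, dimensions, and the choice of channel-wise concatenation are assumptions for illustration, not Eagle's actual code, and the Pre-Alignment training stage is omitted.

```python
# Minimal sketch of fusing multiple vision encoders by concatenating
# their tokens. All classes and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Stand-in for a pretrained vision encoder (e.g. a ViT); emits
    patch tokens of shape (batch, num_tokens, dim)."""
    def __init__(self, dim, patch=14):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, image):
        x = self.patchify(image)             # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)  # (B, N, dim)

class MixOfEncoders(nn.Module):
    """Fuses several encoders by channel-wise concatenation of their
    tokens, then projects into the language model's embedding space."""
    def __init__(self, encoders, encoder_dims, llm_dim):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.proj = nn.Linear(sum(encoder_dims), llm_dim)

    def forward(self, image):
        # Assumes every encoder yields the same number of tokens so
        # features can be fused per spatial location; in practice,
        # token grids from different resolutions would be resampled.
        feats = [enc(image) for enc in self.encoders]
        fused = torch.cat(feats, dim=-1)     # concatenate along channels
        return self.proj(fused)              # visual tokens for the LLM

enc_a, enc_b = DummyEncoder(1024), DummyEncoder(768)
mix = MixOfEncoders([enc_a, enc_b], encoder_dims=[1024, 768], llm_dim=4096)
tokens = mix(torch.randn(2, 3, 336, 336))
print(tokens.shape)  # torch.Size([2, 576, 4096])
```

With two stand-in encoders each producing 576 patch tokens from a 336x336 image, the module yields a (2, 576, 4096) tensor of visual tokens ready to be interleaved with language tokens; the single linear projection is the only fusion machinery required under this design.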